3 research outputs found

    Dynamic generalized normal distribution optimization for feature selection

    High dimensionality of data is a major problem that degrades classification accuracy, and it mainly results from the presence of irrelevant features. Feature selection addresses this problem by selecting the most informative features and discarding the irrelevant ones. Generalized normal distribution optimization (GNDO) is a recently developed optimization algorithm that has outperformed well-known optimization algorithms on parameter extraction for photovoltaic models. As an optimization algorithm, however, GNDO suffers from degraded performance on high-dimensional problems: its exploitation is weak, so it tends to fall into local optima, and its solution diversity deteriorates on high-dimensional data. To alleviate these drawbacks and solve feature selection problems, a local search algorithm (LSA) is incorporated. The resulting algorithm, called dynamic generalized normal distribution optimization (DGNDO), includes the following main improvements to GNDO: it refines the best solution to address the local optima problem, it improves solution diversity by refining a randomly selected solution, and it strengthens exploration and exploitation in combination. To confirm the efficiency and superior performance of DGNDO, the algorithm is applied to 20 benchmark datasets from the UCI repository, and its results are compared with seven well-known optimization algorithms using several evaluation metrics, including classification accuracy, fitness value, the number of selected features, statistical results from the Wilcoxon test, and convergence curves. The obtained results reveal the superiority of DGNDO over all competing algorithms.
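    The abstract does not give the LSA's exact update rules, but a minimal sketch of this style of local search, assuming a binary feature mask and the common weighted wrapper fitness (the names, the weight alpha, and the bit-flip move are illustrative assumptions, not the authors' exact formulation), looks like this:

    ```python
    import random

    def fitness(mask, error_rate, alpha=0.99):
        """Common wrapper fitness: weighted classifier error plus feature ratio."""
        n_selected = sum(mask)
        return alpha * error_rate(mask) + (1 - alpha) * n_selected / len(mask)

    def local_search(best_mask, error_rate, iters=50, flips=2):
        """Refine the incumbent solution by small random bit flips."""
        best_fit = fitness(best_mask, error_rate)
        for _ in range(iters):
            candidate = best_mask[:]
            for i in random.sample(range(len(candidate)), flips):
                candidate[i] = 1 - candidate[i]      # toggle a feature on/off
            if any(candidate):                       # keep at least one feature
                cand_fit = fitness(candidate, error_rate)
                if cand_fit < best_fit:              # minimization: accept only improvements
                    best_mask, best_fit = candidate, cand_fit
        return best_mask, best_fit
    ```

    In DGNDO's terms, a routine like this could be applied to the incumbent best solution each iteration, which is what lets the search escape a local optimum without restarting the whole population.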

    Improved Reptile Search Optimization Algorithm Using Chaotic Map and Simulated Annealing for Feature Selection in the Medical Field

    The increasing volume of medical datasets has produced high-dimensional feature spaces that negatively affect machine learning (ML) classifiers. In ML, feature selection is fundamental for selecting the most relevant features and reducing redundant and irrelevant ones, and optimization algorithms have demonstrated their capability to solve feature selection problems. The Reptile Search Algorithm (RSA) is a new nature-inspired optimization algorithm that simulates crocodiles' encircling and hunting behavior. RSA's unique search obtains promising results compared with other optimization algorithms; however, when applied to high-dimensional feature selection problems, it suffers from limited population diversity and from falling into local optima. An improved metaheuristic optimizer, the Improved Reptile Search Algorithm (IRSA), is proposed to overcome these limitations and adapt RSA to the feature selection problem. Two main improvements add value to the standard RSA: the first applies chaos theory at the initialization phase of RSA to enhance its exploration of the search space; the second combines the Simulated Annealing (SA) algorithm with the exploitation search to avoid the local optima problem. IRSA's performance was evaluated on 20 medical benchmark datasets from the UCI machine learning repository and compared with the standard RSA and state-of-the-art optimization algorithms, including Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Grasshopper Optimization Algorithm (GOA), and Slime Mould Optimization (SMO). The evaluation metrics include the number of selected features, classification accuracy, fitness value, the Wilcoxon statistical test (p-value), and convergence curves. Based on the results obtained, IRSA confirmed its superiority over the original RSA and the other optimization algorithms on the majority of the medical datasets.
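    A minimal sketch of the two ingredients named above, assuming a logistic chaotic map for initialization and the standard simulated-annealing acceptance rule (the authors' exact map, cooling schedule, and coupling to RSA's update equations may differ):

    ```python
    import math
    import random

    def chaotic_population(n_agents, n_dims, x0=0.7, r=4.0):
        """Initialize agents in [0, 1] with a logistic map instead of uniform random draws."""
        pop, x = [], x0
        for _ in range(n_agents):
            agent = []
            for _ in range(n_dims):
                x = r * x * (1 - x)   # logistic map: deterministic but chaotic, covers [0, 1] well
                agent.append(x)
            pop.append(agent)
        return pop

    def sa_accept(current_fit, candidate_fit, temperature):
        """Accept a worse exploitation move with probability exp(-delta / T)."""
        delta = candidate_fit - current_fit
        return delta < 0 or random.random() < math.exp(-delta / max(temperature, 1e-12))
    ```

    Occasionally accepting a worse move via `sa_accept` during the exploitation phase is what gives the hybrid a chance to climb out of a local optimum, which matches the role the abstract assigns to SA.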

    Improved Equilibrium Optimization Algorithm Using Elite Opposition-Based Learning and New Local Search Strategy for Feature Selection in Medical Datasets

    The rapid growth of biomedical datasets has generated high-dimensional feature spaces that negatively impact machine learning classifiers. In machine learning, feature selection (FS) is an essential process for selecting the most significant features and reducing redundant and irrelevant ones. In this study, the equilibrium optimization algorithm (EOA) is used to minimize the number of features selected from high-dimensional medical datasets. EOA is a novel physics-based metaheuristic recently proposed for unimodal, multimodal, and engineering problems, and it is considered one of the most powerful, fastest, and best-performing population-based optimization algorithms. However, EOA suffers from local optima and poor population diversity when dealing with high-dimensional features, such as those in biomedical datasets. To overcome these limitations and adapt EOA to feature selection problems, a novel metaheuristic optimizer, the improved equilibrium optimization algorithm (IEOA), is proposed. Two main improvements are included in IEOA: the first applies elite opposition-based learning (EOBL) to improve population diversity; the second integrates three novel local search strategies to prevent the algorithm from becoming stuck in local optima. The local search strategies rely on three approaches: mutation search, mutation-neighborhood search, and a backup strategy. IEOA enhances population diversity, classification accuracy, and the selected feature subsets, and increases the convergence speed. To evaluate IEOA's performance, experiments were conducted on 21 biomedical benchmark datasets gathered from the UCI repository, using four standard metrics: the number of selected features, classification accuracy, fitness value, and the p-value statistical test. Moreover, the proposed IEOA was compared with the original EOA and other well-known optimization algorithms. Based on the experimental results, IEOA confirmed its better performance compared with the original EOA and the other optimization algorithms on the majority of the datasets used.
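    A hedged sketch of elite opposition-based learning as it is commonly defined in the literature: each solution is reflected through the dynamic bounds spanned by the current elite solutions, and the fitter of each original/opposite pair is kept. The function names, the number of elites, and the greedy retention step are illustrative assumptions rather than the authors' exact design.

    ```python
    import random

    def eobl(population, fitnesses, fitness_fn, n_elite=3):
        """Generate elite-opposite solutions and greedily retain the better of each pair."""
        # Dynamic per-dimension bounds spanned by the n_elite best (lowest-fitness) agents.
        ranked = sorted(zip(fitnesses, population), key=lambda t: t[0])
        elite = [p for _, p in ranked[:n_elite]]
        dims = range(len(population[0]))
        lo = [min(e[d] for e in elite) for d in dims]
        hi = [max(e[d] for e in elite) for d in dims]
        new_pop = []
        for agent, fit in zip(population, fitnesses):
            k = random.random()
            # Opposite point: reflect the agent through the elite interval [lo, hi].
            opposite = [k * (lo[d] + hi[d]) - agent[d] for d in dims]
            # Clamp back into the elite bounds if the reflection escapes them.
            opposite = [min(max(v, lo[d]), hi[d]) for d, v in enumerate(opposite)]
            new_pop.append(opposite if fitness_fn(opposite) < fit else agent)
        return new_pop
    ```

    Evaluating both an agent and its elite-opposite effectively doubles the sampled region around the best solutions, which is how EOBL raises population diversity without extra random restarts.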